
    Does the Redundant Signals Effect Occur with Categorical Signals?

    The redundant signals effect (RSE) refers to a decrease in response time (RT) when multiple signals are present compared to when one signal is present. The RSE is widespread when responses are made to specific signals; for example, a participant who is asked to respond to the letter “N” will respond more quickly to two “Ns” than to one “N.” The current research was conducted to determine whether the RSE generalizes to categorical signals. In Experiment 1, participants pressed a button when they saw any number on a computer screen. Each trial contained two stimuli, each subtending 1° of visual angle, placed 3° above and below the center of the screen. Both stimuli were letters on 50% of trials (no-signal condition), one stimulus was a number on 25% of trials (single-signal condition), and both stimuli were numbers on 25% of trials (redundant-signal condition). RT was faster in the redundant-signal condition (461 ms) than in the single-signal condition (509 ms, p < .001), indicating that the RSE occurred. However, Experiment 1 contained noise (a letter) in the single-signal condition; when the noise letter was removed in Experiment 2, the RSE was nonsignificant (redundant-signal RT = 446 ms, single-signal RT = 458 ms, p = .167). Nevertheless, the trend in Experiment 2 was towards an RSE, and the fast RTs may indicate a ceiling effect. For now, the evidence in favor of a categorical RSE is mixed; further research is expected to provide clarity on the issue.

    Redundant Signals in the Triple Conjunction Effect

    The triple conjunction effect (TCE) is characterized by faster response times (RTs) when a target is defined by three features than when it is defined by two features. Similarly, the redundant signals effect (RSE) is characterized by faster RTs when a display contains multiple features that are each sufficient to define a target. When a single display element contains multiple target features in separate feature dimensions, the RSE may be attributable to feature coactivation, in which information from multiple features combines to reach a response threshold. Because triple conjunctions contain an extra distinguishing feature, they are comparable to redundant-signal displays, and feature coactivation may therefore be expected. In the current study, participants searched for the presence of a target letter in 4 blocks of conjunction search trials (2 of color and orientation, and 2 of form and orientation) and 2 blocks of triple conjunction search trials (color, form, and orientation). Each trial contained 4 or 8 letters subtending 2° by 2° on an invisible circle 8° from the center of the display. Trials were terminated if participants moved their eyes more than 2.75° from the center or did not respond within 4 seconds. A similar second experiment was conducted with distractor homogeneity equated across conjunction and triple conjunction searches. Results indicated that the TCE occurred in both experiments; RTs were ~206 ms faster in triple conjunction search than in conjunction search. The Townsend Bound, a theoretical minimum for triple conjunction RT under the assumption that no coactivation occurred, was violated at several quantiles (5-16 of 18 quantiles, depending on experiment, set size, and target) when RT was averaged across participants. Additionally, most participants individually violated the Townsend Bound in at least some conditions, providing further evidence for coactivation. The results suggest that the TCE is at least partially due to coactivation of target-relevant features.
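
    As a rough illustration of this style of quantile-wise bound test (not the authors' analysis code), the sketch below checks a Miller-style race-model bound at evenly spaced quantile levels. The assumption that the Townsend Bound takes the familiar form P_triple(RT <= t) <= P_A(RT <= t) + P_B(RT <= t) is ours, and the RT samples are invented.

        import numpy as np

        def ecdf_at(rts, t):
            """Empirical P(RT <= t) for a sample of response times."""
            return np.mean(np.asarray(rts) <= t)

        def bound_violations(rt_triple, rt_a, rt_b, n_quantiles=18):
            """Check a race-model-style bound at evenly spaced quantile levels.

            Assumes a Miller-style form, P_triple(t) <= P_A(t) + P_B(t);
            the paper's exact Townsend Bound may differ in detail. Returns
            the quantile levels at which the bound is violated, taken as
            evidence of coactivation.
            """
            levels = np.linspace(0.05, 0.95, n_quantiles)
            ts = np.quantile(rt_triple, levels)  # evaluate at triple-search RT quantiles
            return [lvl for lvl, t in zip(levels, ts)
                    if ecdf_at(rt_triple, t) > ecdf_at(rt_a, t) + ecdf_at(rt_b, t)]

        # Invented RT samples (ms); real data would come from the experiment.
        rng = np.random.default_rng(0)
        print(bound_violations(rng.normal(650, 80, 500),
                               rng.normal(850, 100, 500),
                               rng.normal(860, 100, 500)))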

    Neuro-Symbolic Verification of Deep Neural Networks


    Specification sketching for Linear Temporal Logic

    Virtually all verification and synthesis techniques assume that the formal specifications are readily available, functionally correct, and fully match the engineer's understanding of the given system. However, this assumption is often unrealistic in practice: formalizing system requirements is notoriously difficult, error-prone, and requires substantial training. To alleviate this severe hurdle, we propose a fundamentally novel approach to writing formal specifications, named specification sketching for Linear Temporal Logic (LTL). The key idea is that an engineer can provide a partial LTL formula, called an LTL sketch, where parts that are hard to formalize can be left out. Given a set of examples describing system behaviors that the specification should or should not allow, the task of a so-called sketching algorithm is then to complete a given sketch such that the resulting LTL formula is consistent with the examples. We show that deciding whether a sketch can be completed falls into the complexity class NP and present two SAT-based sketching algorithms. We also demonstrate that sketching is a practical approach to writing formal specifications using a prototype implementation.
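
    To make the idea concrete, consider a small, hypothetical example (the proposition names are invented, not taken from the paper). An engineer who knows that something should always follow a request, but is unsure how to formalize the "follow" part, may write the sketch

        G(request -> ?1)

    where ?1 marks the hole. Given positive examples in which every request is eventually followed by a grant, and negative examples in which some request is never granted, a sketching algorithm may complete the hole as ?1 = F grant, yielding the specification G(request -> F grant) ("globally, every request is eventually granted"), which is consistent with all examples.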

    A Learning-Based Approach to Synthesizing Invariants for Incomplete Verification Engines


    Invariant Synthesis for Incomplete Verification Engines

    We propose a framework for synthesizing inductive invariants for incomplete verification engines, which soundly reduce logical problems in undecidable theories to decidable theories. Our framework is based on the principle of counterexample-guided inductive synthesis (CEGIS) and allows verification engines to communicate non-provability information to guide invariant synthesis. We show precisely how the verification engine can compute such non-provability information and how to build effective learning algorithms when invariants are expressed as Boolean combinations of a fixed set of predicates. Moreover, we evaluate our framework in two verification settings, one in which verification engines need to handle quantified formulas and one in which verification engines have to reason about heap properties expressed in an expressive but undecidable separation logic. Our experiments show that our invariant synthesis framework based on non-provability information can both effectively synthesize inductive invariants and adequately strengthen contracts across a large suite of programs.
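
    A minimal sketch of the CEGIS-style loop on a toy problem, with invented names throughout: states are small integers, the "engine" is a brute-force checker that returns concrete counterexamples (a real incomplete engine would call a solver and could additionally return non-provability information as extra constraints for the learner), and candidate invariants are conjunctions over a fixed predicate set, echoing the paper's learning setting.

        from itertools import combinations

        # Toy transition system: states 0..20, initial state 0, step x -> x + 2
        # (saturating), bad states are the odd numbers.
        STATES = range(21)
        INIT = {0}
        BAD = {x for x in STATES if x % 2 == 1}
        def step(x): return x + 2 if x + 2 in STATES else x

        PREDICATES = {  # fixed set of candidate predicates (invented)
            "even": lambda x: x % 2 == 0,
            "nonneg": lambda x: x >= 0,
            "small": lambda x: x < 10,
        }

        def verify(inv):
            """Check initiation, consecution, and safety by brute force."""
            for x in INIT:
                if not inv(x): return ("cex-init", x)
            for x in STATES:
                if inv(x) and not inv(step(x)): return ("cex-step", x)
            for x in BAD:
                if inv(x): return ("cex-safe", x)
            return ("ok", None)

        def cegis():
            """Enumerate conjunctions; counterexamples prune later candidates."""
            must_hold, must_exclude = set(), set()
            for r in range(1, len(PREDICATES) + 1):
                for names in combinations(PREDICATES, r):
                    inv = lambda x, ns=names: all(PREDICATES[n](x) for n in ns)
                    # Cheap consistency check against accumulated counterexamples.
                    if any(not inv(x) for x in must_hold) or any(inv(x) for x in must_exclude):
                        continue
                    status, x = verify(inv)
                    if status == "ok": return names
                    if status == "cex-init": must_hold.add(x)
                    elif status == "cex-safe": must_exclude.add(x)
                    # A "cex-step" yields an implication constraint; omitted here.
            return None

        print(cegis())  # ('even',)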

    Scalable Anytime Algorithms for Learning Fragments of Linear Temporal Logic

    Linear temporal logic (LTL) is a specification language for finite sequences (called traces) widely used in program verification, motion planning in robotics, process mining, and many other areas. We consider the problem of learning formulas in fragments of LTL without the U (until) operator for classifying traces. Despite a growing interest of the research community, existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result. We introduce a new algorithm addressing both issues: our algorithm is able to construct formulas an order of magnitude larger than previous methods, and it is anytime, meaning that in most cases it successfully outputs a formula, albeit possibly not of minimal size. We evaluate the performance of our algorithm using an open-source implementation against publicly available benchmarks.
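
    The sketch below is not the paper's algorithm (which uses a far more scalable search); it is a naive enumeration over a U-free fragment, included only to illustrate the anytime interface: consistent formulas are yielded as soon as they are found, so the caller can stop at any point and keep the best formula so far. The traces, atoms, and fragment choice are invented.

        from itertools import product

        # Formulas are nested tuples over a U-free fragment:
        # atoms ("ap", a), unary X/F/G, and conjunction ("&", l, r).
        def holds(f, trace, i=0):
            """Evaluate a formula on a finite trace at position i."""
            op = f[0]
            if op == "ap": return i < len(trace) and f[1] in trace[i]
            if op == "X": return i + 1 < len(trace) and holds(f[1], trace, i + 1)
            if op == "F": return any(holds(f[1], trace, j) for j in range(i, len(trace)))
            if op == "G": return all(holds(f[1], trace, j) for j in range(i, len(trace)))
            if op == "&": return holds(f[1], trace, i) and holds(f[2], trace, i)

        def formulas(atoms, size):
            """Enumerate formulas with exactly `size` operators/atoms."""
            if size == 1:
                yield from (("ap", a) for a in atoms)
                return
            for sub in formulas(atoms, size - 1):
                for op in ("X", "F", "G"):
                    yield (op, sub)
            for ls in range(1, size - 1):
                for l, r in product(formulas(atoms, ls), formulas(atoms, size - 1 - ls)):
                    yield ("&", l, r)

        def learn_anytime(pos, neg, atoms, max_size=6):
            """Yield each consistent formula as soon as it is found."""
            for size in range(1, max_size + 1):
                for f in formulas(atoms, size):
                    if all(holds(f, t) for t in pos) and not any(holds(f, t) for t in neg):
                        yield f

        # Invented traces (sequences of sets of atoms): learn "eventually q".
        pos = [[{"p"}, {"q"}], [{"q"}]]
        neg = [[{"p"}, {"p"}]]
        print(next(learn_anytime(pos, neg, ["p", "q"])))  # ('F', ('ap', 'q'))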

    Learning Interpretable Temporal Properties from Positive Examples Only

    We consider the problem of explaining the temporal behavior of black-box systems using human-interpretable models. To this end, based on recent research trends, we rely on the fundamental yet interpretable models of deterministic finite automata (DFAs) and linear temporal logic (LTL) formulas. In contrast to most existing works for learning DFAs and LTL formulas, we rely on only positive examples. Our motivation is that negative examples are generally difficult to observe, in particular, from black-box systems. To learn meaningful models from positive examples only, we design algorithms that rely on conciseness and language minimality of models as regularizers. To this end, our algorithms adopt two approaches: a symbolic and a counterexample-guided one. While the symbolic approach exploits an efficient encoding of language minimality as a constraint satisfaction problem, the counterexample-guided one relies on generating suitable negative examples to prune the search. Both approaches provide us with effective algorithms with theoretical guarantees on the learned models. To assess the effectiveness of our algorithms, we evaluate all of them on synthetic data.
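
    As a toy illustration of the language-minimality regularizer only (regexes over a bounded universe stand in for the paper's DFAs and LTL formulas, the candidate pool is fixed rather than learned, and all names are invented):

        import re
        from itertools import product

        positives = ["ab", "aab", "aaab"]  # positive examples only, no negatives
        ALPHABET = "ab"
        candidates = [r"[ab]*", r"a*b*", r"a+b", r"a*b"]  # hypothetical model pool

        def language_size(pattern, max_len=5):
            """Crude proxy for language minimality: count accepted strings
            up to a bounded length."""
            return sum(1 for n in range(max_len + 1)
                       for chars in product(ALPHABET, repeat=n)
                       if re.fullmatch(pattern, "".join(chars)))

        consistent = [p for p in candidates
                      if all(re.fullmatch(p, s) for s in positives)]
        # Without a regularizer, the trivial model [ab]* "explains" everything;
        # preferring the smallest language selects the tightest consistent model.
        print(min(consistent, key=language_size))  # a+b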